368 research outputs found

    Numerical Methods for Parasitic Extraction of Advanced Integrated Circuits

    Get PDF
    FinFETs (Fin Field-Effect Transistors) are a type of non-planar transistor used in modern integrated circuits. Fast and accurate parasitic capacitance and resistance extraction is crucial in the design and verification of FinFET integrated circuits. Though a wide variety of techniques is available for parasitic extraction, FinFETs still pose tremendous challenges due to their complex geometries and usage models. In this thesis, we propose three practical techniques for parasitic extraction of FinFET integrated circuits. The first technique addresses the dilemma that foundries and IP vendors face in protecting sensitive information that is a prerequisite for accurate parasitic extraction. We propose an innovative solution: building a macro model around any 2D/3D region of a circuit where foundries or IP vendors wish to hide information, such that the macro model still allows accurate capacitance extraction both inside and outside the region. The second technique reduces the truncation error introduced by the traditional Neumann boundary condition. We make a fundamental contribution to the theory of field solvers by proposing a class of absorbing boundary conditions which, when placed on the boundary of the numerical region, act as if the region extended to infinity. As a result, we can significantly reduce the size of the numerical region, which in turn reduces the run time without sacrificing accuracy. Finally, we improve the accuracy and efficiency of resistance extraction for FinFETs with non-orthogonal resistivity interfaces through FVM and IFEM. The performance of FVM is comparable to FEM but with better stability, since the conservation law is guaranteed. IFEM outperforms the other methods, including FDM, FEM and FVM, in both efficiency and mesh generation cost. The proposed methods are based on rigorous mathematical derivations and are verified through experimental results on practical examples.
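The truncation error caused by placing a Neumann boundary too close to the conductors can be illustrated with a minimal finite-difference sketch. Everything below is synthetic: the grid sizes, plate geometry and the flux-based capacitance proxy are illustrative assumptions, not the thesis's solver.

```python
import numpy as np

def plate_capacitance(width, height=40, iters=8000):
    """2-D finite-difference Laplace solve: a 1 V strip held above a
    grounded plane, with Neumann (zero normal derivative) walls on the
    left, right and top of the box. Returns a flux sum near the plate
    as a crude capacitance proxy."""
    phi = np.zeros((height, width))
    y, x0, x1 = 10, width // 2 - 5, width // 2 + 5   # plate location
    for _ in range(iters):
        phi[y, x0:x1] = 1.0                 # conductor held at 1 V
        phi[0, :] = 0.0                     # grounded bottom plane
        # Jacobi relaxation of the interior
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                                  + phi[1:-1, :-2] + phi[1:-1, 2:])
        phi[-1, :] = phi[-2, :]             # Neumann top wall
        phi[:, 0] = phi[:, 1]               # Neumann left wall
        phi[:, -1] = phi[:, -2]             # Neumann right wall
    phi[y, x0:x1] = 1.0
    gy, gx = np.gradient(phi)
    return np.hypot(gx, gy)[y - 1:y + 2, x0 - 1:x1 + 1].sum()

narrow = plate_capacitance(40)    # truncating walls close to the plate
wide = plate_capacitance(120)     # walls pushed farther away
# the two estimates differ: the close-in Neumann walls distort the
# fringing field, which is exactly the truncation error at issue
```

Enlarging the box shrinks this error at the cost of run time; the absorbing boundary conditions proposed in the thesis aim to get the large-box answer on the small box.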

    Machine learning for identifying patterns in human gait: Classification of age and clinical groups

    Get PDF
    Every human being walks in a different way. Emerging evidence suggests that the signature or pattern of gait is a sensitive biomarker of old age and is related to impairments in mobility and cognition. While clinical observation of gait patterns is helpful, it still lacks sufficient accuracy and specificity for early identification of health problems. Limb and trunk accelerations make up the patterns of gait and can be recorded by wearable sensors, producing a wealth of movement data. Machine learning is an efficient data science method for analysing sensor-generated data. This thesis used machine learning methods to classify populations based on age, fall history, or cognitive status. We quantified spatial-temporal gait characteristics and dynamic gait outcomes extracted from 3D-accelerometer time-series signals. We compared classification performances to determine which machine learning models were optimal for a given comparison. As hypothesized, gait characteristics differed between age groups and between fallers and non-fallers, so these groups could be accurately classified by inputting gait characteristics into classification models. Additionally, geriatric patients with or without cognitive impairments may have additional diseases or underlying medical conditions; therefore, combining gait characteristics with clinical tests classified these groups more accurately than inputting only gait or only clinical characteristics. In conclusion, using machine learning to analyse gait patterns can enhance our understanding of how age and pathology affect gait, and can support clinicians in diagnosing and eventually treating those with age-related motor and cognitive impairments.
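The classification setup can be sketched roughly as follows. This is not the thesis's actual pipeline or data: the three features, the group means, and the random-forest choice are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# synthetic per-subject features: gait speed (m/s), stride-time
# variability (%), step regularity (0-1); the group distributions are
# invented, with older adults drawn slower and more variable
young = np.column_stack([rng.normal(1.3, 0.10, n),
                         rng.normal(2.0, 0.50, n),
                         rng.normal(0.85, 0.05, n)])
older = np.column_stack([rng.normal(1.0, 0.15, n),
                         rng.normal(4.0, 1.00, n),
                         rng.normal(0.70, 0.08, n)])
X = np.vstack([young, older])
y = np.array([0] * n + [1] * n)          # 0 = young, 1 = older

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated accuracy
```

Because the synthetic group distributions are well separated, cross-validated accuracy is high here; on real cohorts the separation, and hence the achievable accuracy, is what the thesis measures.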

    Quasi-optimal Learning with Continuous Treatments

    Full text link
    Many real-world applications of reinforcement learning (RL) require making decisions in continuous action environments. In particular, determining the optimal dose level plays a vital role in developing medical treatment regimes. One challenge in adapting existing RL algorithms to medical applications, however, is that the popular infinite-support stochastic policies, e.g., the Gaussian policy, may assign dangerously high dosages and seriously harm patients. Hence, it is important to induce a policy class whose support contains only near-optimal actions, shrinking the action-searching area for effectiveness and reliability. To achieve this, we develop a novel \emph{quasi-optimal learning algorithm}, which can be easily optimized in off-policy settings with guaranteed convergence under general function approximations. Theoretically, we analyze the consistency, sample complexity, adaptability, and convergence of the proposed algorithm. We evaluate our algorithm with comprehensive simulated experiments and a dose-suggestion application to the Ohio Type 1 diabetes dataset. Comment: The first two authors contributed equally to this work.
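The risk of infinite-support policies can be seen in a small simulation. The dose levels, policy shapes and threshold below are made up for illustration; the paper's quasi-optimal policy class is learned, not a fixed uniform interval.

```python
import numpy as np

rng = np.random.default_rng(1)
optimal, harmful = 4.0, 10.0     # hypothetical optimal / harmful doses
n = 100_000

# infinite-support Gaussian policy centred on the optimal dose
gaussian_doses = rng.normal(optimal, 2.0, n)
p_harm_gaussian = np.mean(gaussian_doses >= harmful)

# a support-restricted policy: mass only on a near-optimal interval
# (a fixed uniform here purely for illustration)
restricted_doses = rng.uniform(optimal - 1.0, optimal + 1.0, n)
p_harm_restricted = np.mean(restricted_doses >= harmful)
# the Gaussian policy occasionally samples harmful doses;
# the restricted policy structurally cannot
```

The Gaussian's tail mass beyond the harmful threshold is small but nonzero, which is unacceptable when each sample is a dose given to a patient; truncating the support removes the risk by construction.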

    Driving maneuvers prediction based on cognition-driven and data-driven method

    Full text link
    Advanced Driver Assistance Systems (ADAS) significantly improve driving safety. They alert drivers to unsafe traffic conditions when a dangerous maneuver is anticipated. Traditional methods to predict driving maneuvers are mostly based on data-driven models alone. However, understanding the driver's intention remains an ongoing challenge due to a lack of integration between human cognition and data analysis. To overcome this challenge, we propose a novel method that combines a cognition-driven model and a data-driven model. We introduce a model named Cognitive Fusion-RNN (CF-RNN), which fuses the data inside the vehicle and the data outside the vehicle in a cognitive way. The CF-RNN model consists of two Long Short-Term Memory (LSTM) branches regulated by human reaction time. Experiments on the Brain4Cars benchmark dataset demonstrate that the proposed method outperforms previous methods and achieves state-of-the-art performance.
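A two-branch LSTM fusion model of this kind can be sketched as follows. The dimensions, feature choices and plain concatenation fusion are assumptions for illustration; the actual CF-RNN additionally regulates the branches by human reaction time.

```python
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    """One LSTM over in-cabin (driver-facing) features and one over
    outside-road features; the final hidden states are concatenated
    and mapped to maneuver logits."""
    def __init__(self, d_inside=16, d_outside=8, hidden=32, n_maneuvers=5):
        super().__init__()
        self.inside = nn.LSTM(d_inside, hidden, batch_first=True)
        self.outside = nn.LSTM(d_outside, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_maneuvers)

    def forward(self, x_inside, x_outside):
        _, (h_in, _) = self.inside(x_inside)     # h_in: (1, B, hidden)
        _, (h_out, _) = self.outside(x_outside)
        fused = torch.cat([h_in[-1], h_out[-1]], dim=-1)
        return self.head(fused)                  # (B, n_maneuvers) logits

model = TwoBranchLSTM()
# batch of 4 sequences, 20 time steps each
logits = model(torch.randn(4, 20, 16), torch.randn(4, 20, 8))
```

Keeping the two streams in separate recurrent branches before fusion lets each branch learn dynamics on its own timescale, which is the structural idea behind the cognitive fusion.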

    Pathway Interaction Network Analysis Identifies Dysregulated Pathways in Human Monocytes Infected by Listeria monocytogenes

    Get PDF
    In our study, we aimed to extract dysregulated pathways in human monocytes infected by Listeria monocytogenes (LM) based on a pathway interaction network (PIN), which represents the functional dependency between pathways. After genes were aligned to the pathways, principal component analysis (PCA) was used to calculate the pathway activity for each pathway, followed by detection of the seed pathway. A PIN was constructed based on a gene expression profile, protein-protein interactions (PPIs), and cellular pathways. Dysregulated pathways were then identified from the PIN based on the seed pathway and classification accuracy. To evaluate the feasibility of the PIN method, we compared the introduced method with standard network centrality measures. The pathway of RNA polymerase II pre-transcription events was selected as the seed pathway. Starting from this seed pathway, one pathway set (9 dysregulated pathways) with an AUC score of 1.00 was identified. Among the 5 hub pathways obtained using standard network centrality measures, 4 were common to the two methods. RNA polymerase II transcription and DNA replication contained higher numbers of pathway genes and differentially expressed genes (DEGs). These dysregulated pathways work together to influence the progression of LM infection, and they may serve as biomarkers for diagnosing LM infection.
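The PCA-based pathway activity step can be sketched as a generic first-principal-component summary over a pathway's genes. The study's exact variant and preprocessing may differ, and the data below are random.

```python
import numpy as np

def pathway_activity(expr, pathway_gene_idx):
    """Per-sample pathway activity as the first principal-component
    score over the pathway's genes. expr is (n_samples, n_genes);
    pathway_gene_idx lists the columns belonging to the pathway."""
    sub = expr[:, pathway_gene_idx]
    sub = sub - sub.mean(axis=0)              # centre each gene
    u, s, vt = np.linalg.svd(sub, full_matrices=False)
    return u[:, 0] * s[0]                     # PC1 score per sample

rng = np.random.default_rng(0)
expr = rng.normal(size=(30, 100))   # e.g. 30 monocyte samples, 100 genes
activity = pathway_activity(expr, [2, 5, 9, 40, 77])
```

Collapsing each pathway to a single activity value per sample is what makes the downstream steps (seed-pathway detection and classification accuracy over pathway sets) tractable.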

    Policy Learning for Individualized Treatment Regimes on Infinite Time Horizon

    Full text link
    With recent advancements of technology in facilitating real-time monitoring and data collection, "just-in-time" interventions can be delivered via mobile devices to achieve both real-time and long-term management and control. Reinforcement learning formalizes such mobile interventions as a sequence of decision rules and assigns treatment arms based on the user's status at each decision point. In practice, real applications concern a large number of decision points beyond the time horizon of the currently collected data. This is usually referred to as reinforcement learning in the infinite-horizon setting, which is much more challenging. This article provides a selective overview of some statistical methodologies on this topic. We discuss their modeling framework, generalizability, and interpretability, and provide some use case examples. Some future research directions are discussed at the end.
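The infinite-horizon formalization can be made concrete with a toy discounted Q-learning sketch. The two-state "engagement" MDP, rewards and hyperparameters here are invented for illustration and correspond to no specific method surveyed in the article.

```python
import numpy as np

# toy infinite-horizon MDP: states 0 = disengaged, 1 = engaged;
# actions 0 = no prompt, 1 = send a mobile prompt; reward 1 whenever
# the user ends up engaged; P[s][a] = probability of moving to state 1
P = {0: {0: 0.1, 1: 0.7},
     1: {0: 0.8, 1: 0.9}}
gamma, alpha, eps = 0.9, 0.1, 0.1   # discounting keeps values finite
rng = np.random.default_rng(0)
Q = np.zeros((2, 2))
s = 0
for _ in range(50_000):
    # epsilon-greedy action selection
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s_next = int(rng.random() < P[s][a])
    r = float(s_next == 1)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

policy = Q.argmax(axis=1)   # the learned stationary decision rule
```

The discount factor is what makes the objective well defined over an unbounded number of decision points; the statistical questions the article surveys concern estimating such stationary decision rules from finite-horizon observational data.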

    Joint Compression and Watermarking Using Variable-Rate Quantization and its Applications to JPEG

    Get PDF
    In digital watermarking, one embeds a watermark into a covertext in such a way that the resulting watermarked signal is robust to a certain distortion caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. In addition to robustness, there are two other conflicting requirements a good watermarking system should meet: one is referred to as perceptual quality, that is, the distortion incurred to the original signal should be small; the other is payload, that is, the amount of information embedded (the embedding rate) should be as high as possible. To a large extent, digital watermarking is a science and/or art aiming to design watermarking systems that meet these three conflicting requirements. Since watermarked signals often need to be compressed in real-world applications, we have looked into the design and analysis of joint watermarking and compression (JWC) systems to achieve efficient tradeoffs among the embedding rate, compression rate, distortion and robustness. Using variable-rate scalar quantization, an optimum encoding and decoding scheme for JWC systems is designed and analyzed to maximize the robustness in the presence of additive Gaussian attacks under constraints on both compression distortion and composite rate. Simulation results show that in comparison with the previous work of designing JWC systems using fixed-rate scalar quantization, optimum JWC systems using variable-rate scalar quantization can achieve better performance in the distortion-to-noise ratio region of practical interest. Inspired by the good performance of JWC systems, we then investigate their applications in image compression. We look into the design of a joint image compression and blind watermarking system to maximize the compression rate-distortion performance while maintaining baseline JPEG decoder compatibility and satisfying the additional constraints imposed by watermarking.
    Two watermark embedding schemes, odd-even watermarking (OEW) and zero-nonzero watermarking (ZNW), have been proposed for robustness to a class of standard JPEG recompression attacks. To maximize the compression performance, two corresponding alternating algorithms have been developed to jointly optimize run-length coding, Huffman coding and quantization table selection subject to the additional constraints imposed by OEW and ZNW, respectively. Both algorithms have been demonstrated to achieve better compression performance than the DQW and DEW algorithms developed in the recent literature. Compared with the OEW scheme, the ZNW embedding method sacrifices some payload but gains more robustness against other types of attacks. In particular, the zero-nonzero watermarking scheme can survive a class of valumetric distortion attacks, including additive noise, amplitude changes and recompression, encountered in everyday usage.
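The odd-even embedding idea can be sketched on quantized coefficients. This is a generic parity-forcing toy, not the paper's optimized JPEG pipeline: the index selection, the nudge rule and the handling of zeros are simplifications.

```python
import numpy as np

def oew_embed(coeffs, bits):
    """Force each selected quantized coefficient's parity to match the
    watermark bit (even -> 0, odd -> 1), nudging by one quantization
    step when needed. Real OEW typically avoids zero coefficients to
    protect JPEG run-lengths; this toy does not."""
    out = coeffs.copy()
    for i, b in enumerate(bits):
        if out[i] % 2 != b:              # wrong parity: nudge away from 0
            out[i] = out[i] + 1 if out[i] >= 0 else out[i] - 1
    return out

def oew_extract(coeffs, n_bits):
    return [int(coeffs[i] % 2) for i in range(n_bits)]

q = np.array([4, -3, 0, 7, 2, -6])           # quantized DCT coefficients
wm = [1, 1, 0, 0, 1, 1]                      # watermark bits
marked = oew_embed(q, wm)
recovered = oew_extract(marked, len(wm))     # parity survives recompression
                                             # on the same quantization grid
```

Because the embedding only perturbs each coefficient by at most one quantization step, the distortion cost is bounded; recompression with the same quantization table maps each coefficient back to itself, which is why the parity-coded bits survive that class of attacks.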

    Uplink multi-cell processing: Approximate sum capacity under a sum backhaul constraint

    Get PDF
    This paper investigates an uplink multi-cell processing (MCP) model where the cell sites are linked to a central processor (CP) via noiseless backhaul links with limited capacity. A simple compress-and-forward scheme is employed, where the base-stations (BSs) quantize the received signals and send the quantized signals to the CP using distributed Wyner-Ziv compression. The CP decodes the quantization codewords first, then decodes the user messages as if the users and the CP form a virtual multiple-access channel. This paper formulates the problem of maximizing the overall sum rate under a sum backhaul constraint for such a setting. It is shown that setting the quantization noise levels to be uniform across the BSs maximizes the achievable sum rate under high signal-to-noise ratio (SNR). Further, for general SNR a low-complexity fixed-point iteration algorithm is proposed to optimize the quantization noise levels. This paper further shows that with uniform quantization noise levels, the compress-and-forward scheme with Wyner-Ziv compression already achieves a sum rate that is within a constant gap to the sum capacity of the uplink MCP model. The gap depends linearly on the number of BSs in the network but is independent of the SNR and the channel matrix.
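The quantization-noise trade-off can be sketched numerically. This is a real-valued toy with plain single-BS quantization rather than distributed Wyner-Ziv compression; the channel model, powers and backhaul formula are simplifications, not the paper's expressions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 3, 3                       # base-stations, single-antenna users
H = rng.normal(size=(L, K))       # real-valued channel (a simplification)
sigma2, P_tx = 1.0, 10.0          # receiver noise power, per-user power

def sum_rate(q):
    """Virtual multiple-access-channel sum rate when every BS adds the
    same quantization noise q on top of its receiver noise (the
    uniform-level choice discussed in the paper)."""
    G = np.eye(L) + (P_tx / (sigma2 + q)) * H @ H.T
    return 0.5 * np.log2(np.linalg.det(G))

def backhaul(q):
    """Backhaul needed if each BS independently quantizes its received
    signal at distortion q, ignoring the Wyner-Ziv correlation gain."""
    rx_power = sigma2 + P_tx * (H ** 2).sum(axis=1)
    return 0.5 * np.log2(1.0 + rx_power / q).sum()

r_fine, r_coarse = sum_rate(0.1), sum_rate(10.0)
b_fine, b_coarse = backhaul(0.1), backhaul(10.0)
# finer quantization (smaller q) buys sum rate at the cost of backhaul
```

Sweeping q traces the rate-backhaul trade-off that the sum backhaul constraint pins down; the paper's fixed-point iteration searches over the per-BS noise levels, which at high SNR collapse to the uniform choice used above.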